 comparison study


Reviews: Temporal Regularization for Markov Decision Process

Neural Information Processing Systems

This paper is very interesting. One common assumption in TD learning, which underlies spatial value-function regularization, is that values are similar for states that are close together in the state space; as many papers have pointed out, this is not always realistic and causes problems for spatial regularization. Instead, this paper assumes that rewards (and hence values) are similar for states visited close together in time. Overall the paper has a very good motivation, and the literature review shows that the authors are knowledgeable about this field. It could open up temporal regularization as a novel area that has received inadequate attention before.
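One simple way to realize the temporal idea described above is to mix the usual TD(0) bootstrap target with the value of the previously visited state, smoothing values along trajectories rather than across spatially nearby states. This is an illustrative sketch only; the paper's exact regularization operator may differ, and the function name and parameters are our own.

```python
import numpy as np

def td_temporal(rewards, states, n_states, alpha=0.1, gamma=0.9, beta=0.2):
    """TD(0) with a temporal-regularization weight beta.

    beta=0 recovers plain TD(0); beta>0 pulls the bootstrap target toward
    the value of the previously visited state along the trajectory.
    """
    V = np.zeros(n_states)
    for t in range(1, len(states) - 1):
        s_prev, s, s_next = states[t - 1], states[t], states[t + 1]
        target = rewards[t] + gamma * ((1 - beta) * V[s_next] + beta * V[s_prev])
        V[s] += alpha * (target - V[s])
    return V

# Toy trajectory over three states with alternating rewards.
V = td_temporal(rewards=[0, 1, 0, 1, 0], states=[0, 1, 2, 1, 0], n_states=3)
```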


Learning Hamiltonian neural Koopman operator and simultaneously sustaining and discovering conservation law

Zhang, Jingdong, Zhu, Qunxi, Lin, Wei

arXiv.org Artificial Intelligence

MOE Frontiers Center for Brain Science and State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200032, China (Dated: June 5, 2024)

Accurately finding and predicting dynamics from observational data with noise perturbations is of paramount significance, but it remains a major challenge. Here, for Hamiltonian mechanics, we propose the Hamiltonian Neural Koopman Operator (HNKO), which integrates knowledge from mathematical physics into learning the Koopman operator and makes it automatically sustain, and even discover, conservation laws. We demonstrate the outperformance of the HNKO and its extension using a number of representative physical systems, even with hundreds or thousands of degrees of freedom. Our results suggest that appropriately feeding prior knowledge of the underlying system and the relevant mathematical theory into the learning framework can reinforce the capability of machine learning in solving physical problems. Although outstanding progress has been achieved in Hamiltonian learning and Koopman operator theory, existing frameworks either enlarge the network complexity or overfit the noisy data during the training stage to decrease the loss. We articulate a framework that efficiently and robustly learns the Hamiltonian dynamics based solely on observational data, even with noise perturbations.
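As background to the abstract above, the simplest data-driven Koopman estimate is a least-squares fit of a linear operator to snapshot pairs. The sketch below (not the authors' HNKO; a plain least-squares/DMD-style estimate on a toy energy-preserving rotation) illustrates how an orthogonal learned operator preserves the quadratic invariant |x|², i.e. a conservation law.

```python
import numpy as np

# Toy energy-preserving linear dynamics: a rotation by angle theta.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 200))   # states at time t
Y = A @ X                       # states at time t+1

# Least-squares Koopman/transition operator: K = Y X^+ (pseudoinverse).
K = Y @ np.linalg.pinv(X)

# Orthogonality of K (K^T K = I) encodes preservation of |x|^2.
err = float(np.linalg.norm(K - A))
```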


Exploring the use of a Large Language Model for data extraction in systematic reviews: a rapid feasibility study

Schmidt, Lena, Hair, Kaitlyn, Graziozi, Sergio, Campbell, Fiona, Kapp, Claudia, Khanteymoori, Alireza, Craig, Dawn, Engelbert, Mark, Thomas, James

arXiv.org Artificial Intelligence

This paper describes a rapid feasibility study of using GPT-4, a large language model (LLM), to (semi)automate data extraction in systematic reviews. Despite the recent surge of interest in LLMs, there is still a lack of understanding of how to design LLM-based automation tools and how to robustly evaluate their performance. During the 2023 Evidence Synthesis Hackathon we conducted two feasibility studies. The first aimed to automatically extract study characteristics from human clinical, animal, and social science domain studies; we used two studies from each category for prompt development and ten for evaluation. In the second, we used the LLM to predict Participants, Interventions, Controls and Outcomes (PICOs) labelled within 100 abstracts in the EBM-NLP dataset. Overall, results indicated an accuracy of around 80%, with some variability between domains (82% for human clinical, 80% for animal, and 72% for human social science studies). Causal inference methods and study design were the data extraction items with the most errors. In the PICO study, participants and intervention/control showed high accuracy (>80%), while outcomes were more challenging. Evaluation was done manually; scoring methods such as BLEU and ROUGE showed limited value. We observed variability in the LLM's predictions and changes in response quality. This paper presents a template for future evaluations of LLMs in the context of data extraction for systematic review automation. Our results show that there might be value in using LLMs, for example as second or third reviewers. However, caution is advised when integrating models such as GPT-4 into tools. Further research on stability and reliability in practical settings is warranted for each type of data that is processed by the LLM.
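The manual accuracy evaluation described above can be pictured as field-level exact-match scoring of LLM-extracted items against human gold labels. The field names and values below are illustrative only, not taken from the study's data.

```python
# Hypothetical gold labels and LLM extractions for one study record.
gold = {
    "population": "24 adult rats",
    "intervention": "drug A",
    "outcome": "lesion volume",
}
extracted = {
    "population": "24 adult rats",
    "intervention": "drug B",   # an extraction error
    "outcome": "lesion volume",
}

# Exact-match accuracy per field, case-insensitive.
matches = sum(gold[k].lower() == extracted.get(k, "").lower() for k in gold)
accuracy = matches / len(gold)
```

In practice the study relied on human judgement rather than strict string matching, which is part of why token-overlap scores such as BLEU and ROUGE showed limited value.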


Health Disparities through Generative AI Models: A Comparison Study Using A Domain Specific large language model

Bautista, Yohn Jairo Parra, Lima, Vinicious, Theran, Carlos, Alo, Richard

arXiv.org Artificial Intelligence

Health disparities are differences in health outcomes and access to healthcare between different groups, including racial and ethnic minorities, low-income people, and rural residents. Large language models (LLMs), a class of artificial intelligence (AI) systems that can understand and generate human language, have the potential to improve health communication and reduce health disparities. There are many challenges in using LLMs in human-doctor interaction, including the need for diverse and representative data, privacy concerns, and collaboration between healthcare providers and technology experts. We present a comparative investigation of a domain-specific large language model, SciBERT, against a general-purpose LLM, BERT. We used cosine similarity to analyze text queries about health disparities in exam rooms when factors such as race are used alone. Using text queries, SciBERT fails to differentiate between the query text "race" alone and "race perpetuates health disparities." We believe clinicians can use generative AI to create a draft response when communicating asynchronously with patients. However, careful attention must be paid to ensure such systems are developed and implemented ethically and equitably.
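The cosine-similarity comparison described above reduces to a dot product of normalized embedding vectors. In the study these embeddings would come from SciBERT or BERT encoders; the toy vectors below are stand-ins so the computation itself is clear.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for sentence embeddings of two queries; real values would
# be produced by a SciBERT or BERT encoder.
emb_race = np.array([0.9, 0.1, 0.2])
emb_disparity = np.array([0.8, 0.3, 0.1])

score = cosine_similarity(emb_race, emb_disparity)
```

A score near 1.0 for two queries that differ in meaning (e.g. "race" alone versus "race perpetuates health disparities") is the failure mode the paper reports for SciBERT.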


A Comparison Study of Deep CNN Architectures in Detecting Pneumonia

Porag, Al Mohidur Rahman, Hasan, Md. Mahedi, Ahad, Dr. Md Taimur

arXiv.org Artificial Intelligence

Pneumonia, a respiratory infection brought on by bacteria or viruses, affects a large number of people, especially in developing and impoverished countries where high levels of pollution, unclean living conditions, and overcrowding are frequently observed, along with insufficient medical infrastructure. Pneumonia can cause pleural effusion, a condition in which fluid fills the lung and complicates breathing. Early detection of pneumonia is essential for ensuring curative care and boosting survival rates. The approach most usually used to diagnose pneumonia is chest X-ray imaging. The purpose of this work is to develop a method for the automatic diagnosis of bacterial and viral pneumonia in digital X-ray images. This article first presents the authors' technique and then gives a comprehensive report on recent developments in the field of reliable diagnosis of pneumonia. In this study, the authors tuned state-of-the-art deep convolutional neural networks to classify chest X-ray images and tested their performance. The deep learning architectures are compared empirically: VGG19, ResNet152V2, ResNeXt101, SEResNet152, MobileNetV2, and DenseNet201 are among those tested. The experimental data consist of two groups: sick and healthy X-ray images. Because rapid identification enables timely treatment, fast diagnostic models are preferred. DenseNet201 showed no overfitting or performance degradation in the experiments, and its accuracy tends to increase as the number of epochs increases. Further, DenseNet201 achieves state-of-the-art performance with a significantly smaller number of parameters and within a reasonable computing time. This architecture outperforms the competition in terms of testing accuracy, scoring 95%. Each architecture was trained using Keras with Theano as the backend.


Prediction approaches for partly missing multi-omics covariate data: A literature review and an empirical comparison study

Hornung, Roman, Ludwigs, Frederik, Hagenberg, Jonas, Boulesteix, Anne-Laure

arXiv.org Artificial Intelligence

The generation of various types of omics data is becoming increasingly rapid and cost-effective. As a consequence, more so-called multi-omics data are becoming available, that is, high-dimensional molecular data of several types such as genomic, transcriptomic, or proteomic data measured for the same patients. In the last few years, several approaches to using these data for patient outcome prediction have been developed (see Hornung and Wright (2019) for an extensive literature review). Nevertheless, doubts have recently emerged as to whether there is benefit to using multi-omics data over simple clinical models (Herrmann et al., 2020). Regardless of their usefulness for prediction, multi-omics data sets from different sources that are used for the same prediction problem often do not, for various reasons, feature exactly the same types of data. Most importantly, the data for which predictions should be obtained, that is, the test data, often do not contain the same data types as the data available for obtaining the prediction rule, that is, the training data (Krautenbacher et al., 2019). The training data are also frequently composed of subsets originating from different sources (e.g.


Plant species richness prediction from DESIS hyperspectral data: A comparison study on feature extraction procedures and regression models

Guo, Yiqing, Mokany, Karel, Ong, Cindy, Moghadam, Peyman, Ferrier, Simon, Levick, Shaun R.

arXiv.org Artificial Intelligence

The diversity of terrestrial vascular plants plays a key role in maintaining the stability and productivity of ecosystems. Monitoring species compositional diversity across large spatial scales is challenging and time consuming. The advanced spectral and spatial specification of the recently launched DESIS (the DLR Earth Sensing Imaging Spectrometer) instrument provides a unique opportunity to test the potential for monitoring plant species diversity with spaceborne hyperspectral data. This study provides a quantitative assessment of the ability of DESIS hyperspectral data to predict plant species richness in two different habitat types in southeast Australia. Spectral features were first extracted from the DESIS spectra and then regressed against on-ground estimates of plant species richness, with a two-fold cross-validation scheme to assess predictive performance. We tested and compared the effectiveness of Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), and Partial Least Squares analysis (PLS) for feature extraction, and of Kernel Ridge Regression (KRR), Gaussian Process Regression (GPR), and Random Forest Regression (RFR) for species richness prediction. The best prediction results were r = 0.76 and RMSE = 5.89 for the Southern Tablelands region, and r = 0.68 and RMSE = 5.95 for the Snowy Mountains region. Relative importance analysis for the DESIS spectral bands showed that the red-edge, red, and blue spectral regions were more important for predicting plant species richness than the green bands and the near-infrared bands beyond the red-edge. We also found that the DESIS hyperspectral data performed better than Sentinel-2 multispectral data in the prediction of plant species richness. Our results provide a quantitative reference for future studies exploring the potential of spaceborne hyperspectral data for plant biodiversity mapping.
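One of the feature-extraction/regression combinations described above (PCA features into Kernel Ridge Regression, evaluated with two-fold cross validation) can be sketched with scikit-learn. The data here are synthetic stand-ins for DESIS spectra and field richness counts, with dimensions and hyperparameters chosen only for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: 60 field plots x 200 spectral bands, with species
# richness driven by a few latent spectral components.
latent = rng.normal(size=(60, 3))
X = latent @ rng.normal(size=(3, 200)) + 0.1 * rng.normal(size=(60, 200))
y = latent @ np.array([5.0, -3.0, 2.0]) + 20.0

# PCA feature extraction feeding Kernel Ridge Regression.
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      StandardScaler(), KernelRidge(kernel="rbf", alpha=1.0))

# Two-fold cross validation, as in the study's evaluation scheme.
preds = np.zeros_like(y)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    model.fit(X[train], y[train])
    preds[test] = model.predict(X[test])

r = float(np.corrcoef(y, preds)[0, 1])
rmse = float(np.sqrt(np.mean((y - preds) ** 2)))
```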


On the role of benchmarking data sets and simulations in method comparison studies

Friedrich, Sarah, Friede, Tim

arXiv.org Machine Learning

Method comparisons are essential to provide recommendations and guidance for applied researchers, who often have to choose from a plethora of available approaches. While many comparisons exist in the literature, these are often not neutral but favour a novel method. Apart from the choice of design and proper reporting of the findings, there are different approaches concerning the underlying data for such method comparison studies. Most manuscripts on statistical methodology rely on simulation studies and provide a single real-world data set as an example to motivate and illustrate the methodology investigated. In the context of supervised learning, in contrast, methods are often evaluated using so-called benchmarking data sets, i.e. real-world data that serve as a gold standard in the community. Simulation studies, on the other hand, are much less common in this context. The aim of this paper is to investigate differences and similarities between these approaches, to discuss their advantages and disadvantages, and ultimately to develop new approaches to the evaluation of methods picking the best of both worlds. To this end, we borrow ideas from different contexts such as mixed methods research and Clinical Scenario Evaluation.


New Perspectives on the Use of Online Learning for Congestion Level Prediction over Traffic Data

Manibardo, Eric L., Laña, Ibai, Lobo, Jesus L., Del Ser, Javier

arXiv.org Machine Learning

This work focuses on classification over time series data. When a time series is generated by non-stationary phenomena, the pattern relating the series with the class to be predicted may evolve over time (concept drift). Consequently, predictive models aimed at learning this pattern may eventually become obsolete, failing to sustain performance levels of practical use. To overcome this model degradation, online learning methods incrementally learn from new data samples arriving over time and accommodate eventual changes along the data stream by implementing assorted concept drift strategies. In this manuscript we elaborate on the suitability of online learning methods to predict the road congestion level based on traffic speed time series data. We draw interesting insights on the performance degradation when the forecasting horizon is increased. As opposed to what is done in most of the literature, we provide evidence of the importance of assessing the distribution of classes over time before designing and tuning the learning model. This preliminary exercise may give a hint of the predictability of the different congestion levels under target. Experimental results are discussed over real traffic speed data captured by inductive loops deployed in Seattle (USA). Several online learning methods are analyzed, from traditional incremental learning algorithms to more elaborate deep learning models. As shown by the reported results, when increasing the prediction horizon, the performance of all models degrades severely due to the distribution of classes over time, which supports our claim about the importance of analyzing this distribution prior to the design of the model.
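The incremental learning loop described above can be sketched with scikit-learn's `partial_fit` interface and a prequential (test-then-train) evaluation over a simulated stream with an abrupt concept drift. This is a generic illustration, not the paper's specific models or traffic data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
clf = SGDClassifier(random_state=0)  # incremental linear classifier
classes = np.array([0, 1])

correct, total = 0, 0
boundary = 0.0  # class boundary that shifts halfway through the stream
for t in range(400):
    if t == 200:
        boundary = 2.0  # abrupt concept drift
    x = rng.normal(size=(1, 2)) + boundary
    y = np.array([int(x[0, 0] + x[0, 1] > 2 * boundary)])
    if t > 0:
        # Prequential evaluation: predict on the new sample before training.
        correct += int(clf.predict(x)[0] == y[0])
        total += 1
    clf.partial_fit(x, y, classes=classes)  # then learn from it

accuracy = correct / total
```

The online model keeps adapting its boundary after the drift, whereas a model frozen at t = 200 would degrade, which mirrors the degradation the paper reports for longer horizons under shifting class distributions.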


A Plea for Neutral Comparison Studies in Computational Sciences

Boulesteix, Anne-Laure, Eugster, Manuel J. A.

arXiv.org Machine Learning

In a context where most published articles are devoted to the development of "new methods", comparison studies are generally appreciated by readers but surprisingly given poor consideration by many scientific journals. In connection with recent articles on over-optimism and epistemology published in Bioinformatics, this letter stresses the importance of neutral comparison studies for the objective evaluation of existing methods and the establishment of standards by drawing parallels with clinical research.